
Conversation

spyke7 commented Dec 27, 2025

Hi @orbeckst
I have added OpenVDB.py inside gridData, which exports files in the .vdb format. I have also added test_vdb.py inside tests, and it passes successfully.
Fixes #141

Required libraries:
openvdb

  • conda install -c conda-forge openvdb

There are many things that still need to be updated (docs, etc.), but I have provided just the file and the test so that you can review it and I can fix any problems. Please let me know if anything needs to be changed or updated.

codecov bot commented Dec 27, 2025

Codecov Report

❌ Patch coverage is 92.00000% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.41%. Comparing base (b29c1f4) to head (c8e643d).

Files with missing lines    Patch %    Lines
gridData/OpenVDB.py         95.12%     2 Missing ⚠️
gridData/core.py            71.42%     1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #148      +/-   ##
==========================================
+ Coverage   88.20%   88.41%   +0.20%     
==========================================
  Files           5        6       +1     
  Lines         814      863      +49     
  Branches      107      115       +8     
==========================================
+ Hits          718      763      +45     
- Misses         56       59       +3     
- Partials       40       41       +1     

☔ View full report in Codecov by Sentry.

spyke7 (Author) commented Dec 27, 2025

@orbeckst, please review the OpenVDB.py file. After that, I will add some more tests covering all the missing parts.

orbeckst (Member) commented Dec 27, 2025 via email

orbeckst (Member) left a comment

Thank you for your contribution. Before going further, can you please try your own code and demonstrate that it works? For instance, take some of the bundled test files, such as 1jzv.ccp4 or nAChR_M2_water.plt, write it to OpenVDB, load it in Blender, and show an image of the rendered density?

Once we know that it's working in principle, we'll need proper tests (you can look at PR #147 for a good example of minimal testing for writing functionality).

CHANGELOG Outdated
Comment on lines 24 to 26
Fixes

* Adding openVDB formats (Issue #141)
Member:

Not a fix but an Enhancement – put it into the existing 1.1.0 section and add your name there.

spyke7 (Author):

In the CHANGELOG, this PR and issue are in the 1.1.0 release, so should I add my name to the 1.1.0 release, or remove those lines and put them in the new section?

Member:

Yes, now move it to the new section above since we released 1.1.0.

Comment on lines 183 to 188
for i in range(self.grid.shape[0]):
for j in range(self.grid.shape[1]):
for k in range(self.grid.shape[2]):
value = float(self.grid[i, j, k])
if abs(value) > threshold:
accessor.setValueOn((i, j, k), value)
Member:

This looks really slow — iterating over a grid explicitly. For a start, you can find all cells above a threshold with numpy operations (np.abs(g) > threshold) and then ideally use it in a vectorized form to set the accessor.
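For illustration, a minimal numpy sketch of the suggested thresholding (`active_voxels` is a hypothetical helper name, not part of the PR; the openvdb `accessor.setValueOn` calls would then loop only over the returned coordinates rather than the full grid):

```python
import numpy as np

def active_voxels(grid, threshold=1e-10):
    """Return (indices, values) for voxels whose magnitude exceeds threshold."""
    mask = np.abs(grid) > threshold
    ijk = np.argwhere(mask)                 # (N, 3) integer voxel coordinates
    values = grid[mask].astype(np.float32)  # matching values, same C order
    return ijk, values

# a mostly-empty grid with two significant voxels
g = np.zeros((4, 4, 4))
g[0, 0, 0] = -2.0
g[1, 2, 3] = 0.5
ijk, values = active_voxels(g)
# any remaining Python-level loop (accessor.setValueOn) now touches
# N active cells instead of all nx*ny*nz cells
print(len(ijk))  # 2
```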

@orbeckst orbeckst self-assigned this Jan 9, 2026
@spyke7 spyke7 requested a review from orbeckst January 18, 2026 06:48
spyke7 (Author) commented Jan 18, 2026

Fixed the CHANGELOG and OpenVDB.py. I didn't get time to work on the Blender part due to exams, but I will surely try to do it!

spyke7 (Author) commented Jan 18, 2026

Screenshot (125) Screenshot (126) Screenshot (128) Screenshot (129)

The first two are for nAChR_M2_water.vdb and the last two are for 1jzv.vdb.
Also, in OpenVDB.py the function should be transform.preTranslate, which I will fix with the new tests.
Can you please confirm that these are the correct renderings?
I can provide the .vdb files here as well.

orbeckst (Member):

Good that you're able to load something into Blender. From a first glance I don't recognize what I'd expect, but this may depend on how you render in Blender. As I already said on Discord: try to establish for yourself what "correct" means. Load the original data in a program where you can reliably look at it. ChimeraX is probably the best for looking at densities; it can definitely read DX.

Btw, the M2 density should look similar to the blue "blobs" on the cover of https://sbcb.bioch.ox.ac.uk/users/oliver/download/Thesis/OB_thesis_2sided.pdf

spyke7 (Author) commented Jan 19, 2026

Screenshot (131) Screenshot (132)

The first one is for 1jzv.vdb and the second for nAChR_M2_water.vdb (shading/coloring not done).
I think the .vdb files generated by OpenVDB.py are now rendering correctly in Blender.

Can I proceed with the tests part?

BradyAJohnston (Member):

Mentioned in the Discord, but also bringing it up here: in your current examples (most obvious with the pore), the axis is flipped so that X is "up", whereas atomic coordinates would have Z as up.

spyke7 (Author) commented Jan 19, 2026

Mentioned in the Discord, but also bringing it up here: in your current examples (most obvious with the pore), the axis is flipped so that X is "up", whereas atomic coordinates would have Z as up.

Thank you for the update! I will try to fix this.

spyke7 (Author) commented Jan 19, 2026

Screenshot (133) Screenshot (136)

I think this fixes the axis..

BradyAJohnston (Member):

Ideally we would see this alongside the atoms or density from MN as well, to double-check alignment, because you might also need to flip one of the X or Y axes.

BradyAJohnston (Member):

The scales might be different (larger or smaller by factors of 10), but you can just scale inside Blender by that amount to align them; still, we want to double-check alignment and axes.

spyke7 (Author) commented Jan 20, 2026

Hi @BradyAJohnston
Screenshot (138)
Screenshot (139)

I first added the MolecularNodes add-on as described at https://github.com/BradyAJohnston/MolecularNodes and imported 1jzv.pdb. After that I imported the .vdb file, and there was a difference in size between the two, so I scaled the .pdb up. The centers of both are the same, and I did not flip any of the axes in the screenshots provided.

I wrote a small Blender Python script to compare the bounding boxes of the pdb and vdb objects to verify centroids, extents, and axis alignment:

import bpy
from mathutils import Vector

def bbox_world(obj):
    # world-space min/max corners of the object's bounding box
    bbox = [obj.matrix_world @ Vector(c) for c in obj.bound_box]
    mn = Vector(tuple(min(p[i] for p in bbox) for i in range(3)))
    mx = Vector(tuple(max(p[i] for p in bbox) for i in range(3)))
    return mn, mx

def centroid_world(obj):
    mn, mx = bbox_world(obj)
    return (mn + mx) / 2.0

def size_world(obj):
    mn, mx = bbox_world(obj)
    return mx - mn

pdb = bpy.data.objects.get("1jzv.001")
vdb = bpy.data.objects.get("1jzv")

print("pdb centroid:", centroid_world(pdb))
print("pdb size:", size_world(pdb))
print("vdb centroid:", centroid_world(vdb))
print("vdb size:", size_world(vdb))

Output:
pdb centroid: <Vector (7.6985, 23.7885, 76.0560)>
pdb size: <Vector (33.1410, 45.4170, 29.3960)>
vdb centroid: <Vector (8.7238, 23.4452, 76.7628)>
vdb size: <Vector (43.6190, 52.3429, 40.3425)>

The centroids are almost the same, I guess...
The data seems to be correctly aligned.

@BradyAJohnston BradyAJohnston self-assigned this Jan 20, 2026
BradyAJohnston (Member):

@spyke7 It's still not 100% clear from your screenshots. Can you import the pore instead, as that is clearer? And when you take a screenshot, it would be more helpful to have the imported density in the centre of the screen rather than mostly empty space.

PardhavMaradani:
Looks like you are attempting a standalone export to .vdb files from GridDataFormats. (If your end use case is to use this only within Blender, I'd strongly recommend using MolecularNodes to import various grid formats, as it already uses GridDataFormats internally and provides a lot of cool features: varying ISO values, different colors for positive and negative ISO values, slicing along all three major axes, showing contours, centering, inverting, etc., both from the GUI and the API.) From a quick scan of the code, you seem to want to support both pyopenvdb (the older binding) and openvdb (the newer one); note that there are some minor differences to take into account between them. You can take a look at the grid_to_vdb method from an earlier version of MN, which shows the differences and handles the export to .vdb within MolecularNodes. Hope this helps. Thanks

BradyAJohnston (Member):

If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

PardhavMaradani:
If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

Agreed. In addition to exporting to the .vdb format, we also add some additional metadata (currently, info about inversion and centering) that we later use. So as long as the metadata for Grids is carried over during export, we should probably be good. Thanks

BradyAJohnston (Member):

In addition to exporting to .vdb format, we also add some additional metadata

This is a good point and something to consider as well. As far as I am aware, Blender / MN (and other 3D animation packages) might be the only ones who use .vdb as a format, rather than any scientific packages / pipelines.

If there is anything out there that does take .vdb, then we might want to consider whether any relevant metadata should be saved. We might want to standardise on relevant metadata entries (we could either re-use those from MN or update MN to more general ones) so that GDF interactions with .vdb approach some kind of standard. This might be a larger question outside the scope of a simple read/write, but functionality to pass in custom metadata like we do in MN would certainly be ideal.

orbeckst (Member):

@spyke7 – what have you done so far to convince yourself that, say, the M2 density was correctly processed by your code? It's a vital skill to come up with your own validation and not rely on others to tell you that you're right.

spyke7 (Author) commented Jan 22, 2026

@spyke7 – what have you done so far to convince yourself that, say, the M2 density was correctly processed by your code? It's a vital skill to come up with your own validation and not rely on others to tell you that you're right.

Until now, one of the main problems was whether the exported .vdb files were in the correct alignment and whether they overlapped correctly with the original .plt/.ccp4 data. After seeing the screenshots provided by @PardhavMaradani, I understood that there was a problem with the axes and rotated them. After that, I imported the M2 density using the MN add-on, imported the .vdb exported by OpenVDB.py, and checked whether they have the same orientation and overlap (the screenshots provided above). I think they are overlapping.

I used the MN add-on by @BradyAJohnston to cross-check this and was asking for some final advice. The only thing is that I need to scale the imported M2 density .plt file to the size of the .vdb; otherwise, the origins are almost the same.
I'm still learning here and would appreciate confirmation or suggestions for more robust validation steps.
(I'm still getting up to speed with molecular dynamics data conventions, so I appreciate any guidance here.)

spyke7 (Author) commented Jan 23, 2026

Recording.2026-01-23.140146.mp4

In this video I demonstrate how I made the .vdb exported by OpenVDB.py look similar to the pictures provided in Discord.

spyke7 (Author) commented Jan 23, 2026

I recently checked OpenVDB.py on emd_2984.map. I downloaded it in map format and just renamed the extension to .mrc. After that I imported it as a density in the MN add-on and checked that it displayed correctly. Then, in the same way, I exported emd_2984.mrc to the .vdb format and did the same as in the video. It displayed the same as with the MN add-on.

So overall I think that OpenVDB.py is working fine.

@spyke7 spyke7 requested a review from BradyAJohnston January 25, 2026 17:52
orbeckst (Member) left a comment

I managed to look through 2/3 of the PR, please see comments.

Please also show screenshots demonstrating that your code and the existing MN plugin produce the same output in Blender.

CHANGELOG Outdated
??/??/???? orbeckst, BradyAJohnston, spyke7

* 1.1.1
* 1.1.1
Member:

Under semantic versioning we will need to make this a 1.2.0. Please change:

Suggested change
* 1.1.1
* 1.2.0

(also make sure that indentation is as previously)

Member:

@spyke7 why did you include @BradyAJohnston as an author?

This may certainly be justified, but please explain. Authorship information should not be added without consent and explicit justification.

spyke7 (Author) commented Jan 26, 2026

Looking at previous changelogs, I thought that all members who reviewed the code or were assigned should be added to the authors section, which is why I added it. I shouldn't have added it without asking; I will remove it.

The OpenVDB format is used by Blender and other VFX software for
volumetric data. See https://www.openvdb.org

pyopenvdb: https://github.com/AcademySoftwareFoundation/openvdb
Member:

Instead of bare links, write a sentence that states that this module uses pyopenvdb (and then link to it).


# Populate the grid
accessor = vdb_grid.getAccessor()
threshold = 1e-10
Member:

Where does 1e-10 come from?

This could be a kwarg for __init__ and stored as self.threshold, in case users actually want to customize it.

spyke7 (Author):

I think OpenVDB does not store zeros unless asked to, so values below the threshold are treated as background.
Basically, it is there so that the tiny voxel values below the threshold are not written; otherwise, the file size would grow.

gridData/core.py Outdated
"""
if self.grid.ndim != 3:
raise ValueError(
"OpenVDB export requires a 3D grid, got {}D".format(self.grid.ndim))
Member:

use f-string

Member:

Still uses format.

@PardhavMaradani
Copy link

Some thoughts:

  • I don't think the goal should be to have the OpenVDB exporter produce a .vdb directly that matches/aligns with what MN creates
    • GDF should be independent of MN and Blender
    • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools
    • Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter
  • Ideally, both an import and export support go hand in hand
    • Having an import back from .vdb to write to other formats to visualize in other tools would ensure self validation
    • I only provided the MN reference images because that is the only one I am familiar with and I don't have experience with other tools. Apologies if I misled in any way
    • MN would also benefit from being able to import .vdb files directly similar to other density files
  • Exported .vdb files retaining custom metadata that GDF grids already supports (along with the ability to transform as described above) is a must for MN to be able to use this exporter
  • From a quick look at the code, it seems a lot more complex than what we have in MN. I am not too familiar with the licensing issues etc, so I will let Brady add any thoughts about this

@spyke7 spyke7 requested a review from orbeckst January 26, 2026 14:54
@spyke7
Copy link
Author

spyke7 commented Jan 26, 2026

  • GDF should be independent of MN and Blender
  • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools
  • Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter

Yes, when I import the .vdb as export by the OpenVDB.py it is different in scale than the one import by the MN-add on. Also, I need to add the geometry node as I showed in the video, and change the threshold which is ISO in this case, to make it look like same as that of MN-add on

@spyke7
Copy link
Author

spyke7 commented Jan 26, 2026

Please also show screen shots that demonstrate that your code and the existing MN plugin produce the same output in blender.

if I can make a video showing all the things about importing the .vdb as exported by OpenVDB.py and then importing the MN-add on and making it to the same scale, so that they are overlapping that would be better.

orbeckst (Member):

If a video is necessary then show it. Please also write out (in text) the individual steps that you are carrying out in the video.

(Videos just take a lot of time to watch and are often ambiguous. Text is much faster to digest and more succinct because the writer (hopefully) took the time to think about how to clearly convey the message.)

orbeckst (Member):

@PardhavMaradani many thanks for your input:

I don't think the goal should be to have the OpenVDB exporter produce a .vdb directly that matches/aligns with what MN creates

  • GDF should be independent of MN and Blender

Yes, I agree, the goal here is to have something that works generically and would be easy for MN to adopt in order to reduce code-duplication.

  • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter

I agree. Does the current PR hard-code any of these transformations?

Can you give an example of what this should look like?

Ideally, both an import and export support go hand in hand

  • Having an import back from .vdb to write to other formats to visualize in other tools would ensure self validation.

I'd be ok to have this PR only export and raise a new issue for reading VDB files, which looks very doable, given that @spyke7 is already using some of this openvdb functionality in the tests (which is good!)

MN would also benefit from being able to import .vdb files directly similar to other density files

That's good motivation. Maybe @spyke7 would be interested to work on it after this PR?

Output filename (should end in .vdb)

"""
self.grid=numpy.ascontiguousarray(self.grid, dtype=numpy.float32)
Member:

If this needs to be done, do it in __init__. It's confusing to change attributes as a side effect of writing.

]

vdb_grid.background = 0.0
vdb_grid.transform = vdb.createLinearTransform(matrix)
Member:

I assume that the transformation is required to make the VDB grid to have the correct origin and delta in general.

Or is this something specific to the MN blender use, @PardhavMaradani ?


Or is this something specific to the MN blender use, @PardhavMaradani ?

I addressed this in the comment below. Thanks



grids, metadata = vdb.readAll(outfile)
assert len(grids) == 1
assert grids[0].name == 'density'
Member:

Please assert more about the grid:

  • shape
  • values

Comment on lines 114 to 121
def test_write_vdb_nonuniform_spacing_warning(self, tmpdir):
data = np.ones((3, 3, 3), dtype=np.float32)
delta = np.array([0.5, 1.0, 1.5])
g = Grid(data, origin=[0, 0, 0], delta=delta)

outfile = str(tmpdir / "nonuniform.vdb")
g.export(outfile)
assert tmpdir.join("nonuniform.vdb").exists()
Member:

What are we testing here?

Why does the name contain "warning"?

spyke7 (Author):

This only checks whether the file exists with a non-uniform delta.

spyke7 (Author):

I think in the end I can check whether each axis has the correct spacing. I will update it.


outfile = str(tmpdir / "matrix_delta.vdb")
vdb_field.write(outfile)
assert tmpdir.join("matrix_delta.vdb").exists()
Member:

assert values, namely that the deltas were correctly processed


assert tmpdir.join("sparse.vdb").exists()
grids, metadata = vdb.readAll(outfile)
assert len(grids) == 1
Member:

assert values – some content of the object, e.g. pull out grids[0][2,3,4] and show that it's the same as data[2, 3, 4]

g = Grid(data, origin=[0, 0, 0], delta=[1, 1, 1])
outfile = str(tmpdir / "threshold.vdb")
g.export(outfile)
assert tmpdir.join("threshold.vdb").exists()
Member:

assert values

Comment on lines 195 to 201
@pytest.mark.skipif(HAS_OPENVDB, reason="Testing import error handling")
def test_vdb_import_error():
with pytest.raises(ImportError, match="pyopenvdb is required"):
gridData.OpenVDB.OpenVDBField(
np.ones((3, 3, 3)),
origin=[0, 0, 0],
delta=[1, 1, 1]
Member:

You should be able to test the import handling when the test suite is being run. Look into mocking (https://docs.python.org/3/library/unittest.mock.html; there may also be something for pytest): basically, make it so that the openvdb module cannot be imported just while this test function runs.

spyke7 (Author):

I have not done this in the recent push. I will be doing it!

PardhavMaradani:
Does the current PR hard-code any of these transformations?

There is a hard-coded linear transformation in the current code, as you pointed out in a review comment above. It is unclear to me what it does. @spyke7, could you please help explain? (Maybe I'm reading this incorrectly: if it is a combined scale and translation matrix, shouldn't the translations be in the 4th column and not the row?)

Can you give an example of what this should look like?

This exporter could have additional args (similar to DX) that control the export behavior. Something like:

def _export_vdb(
    self,
    filename,
    center: bool = False,
    scale: float = 1.0,
    metadata: dict = None,
    ...
):
    ...

The center param could control whether the grid is centered around the origin or left untouched, and the scale param could scale the grid (OpenVDB scaling, not GDF grid resampling). scale/preScale and translate/postTranslate in pyopenvdb/openvdb, respectively, can be used. A combined matrix can also be used with createLinearTransform for the respective cases to avoid library-specific methods, if that works.

Ideally, any grid metadata should be exported as OpenVDB metadata, and it would also help if this could be specified at export time. All of the above are generic and useful for any tools. MN requires all of the above to use this exporter as a drop-in replacement.

I am not sure if the threshold param (currently part of __init__, but not exposed in the export) is needed. It looks like it is used to set the tolerance value of the copyFromArray method; maybe it should be renamed accordingly. If it is needed, it should be exposed in a similar way to the params above. Thanks

spyke7 (Author) commented Jan 27, 2026

There is a hard-coded linear transformation in the current code as you pointed out in a review comment above. It is unclear to me what it does - @spyke7 could you please help explain (maybe I'm reading this incorrectly, if it is a combined scale and translation matrix, shouldn't the translations be in the 4th column and not row?)

Yes. This is a row-major matrix (used in the code); the one you describe is the column-major matrix, where the translation is in the 4th column.
The column-major convention is used in OpenGL (I don't know much about this, but it is used there), while openvdb's createLinearTransform requires a row-major matrix. I previously checked the matrix separately and changed it to column-major, i.e., with the translation in the 4th column; createLinearTransform then gave an ArithmeticError: "Tried to initialize an affine transform from a non-affine 4x4 matrix".
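To illustrate the two conventions, a small numpy sketch (the delta and origin values here are made up): in the row-major layout expected by createLinearTransform, the translation sits in the 4th row and homogeneous index coordinates multiply as a row vector on the left; the column-major (OpenGL-style) form is simply the transpose.

```python
import numpy as np

delta = 0.5
origin = np.array([10.0, 20.0, 30.0])

# Row-major affine: scale on the diagonal, translation in the 4th *row*.
M = np.array([
    [delta, 0.0,   0.0,   0.0],
    [0.0,   delta, 0.0,   0.0],
    [0.0,   0.0,   delta, 0.0],
    [origin[0], origin[1], origin[2], 1.0],
])

# With row vectors, index (i, j, k) maps to world space as p_h @ M:
ijk = np.array([2.0, 4.0, 6.0, 1.0])  # homogeneous index coordinates
world = ijk @ M
print(world[:3])  # [11. 22. 33.]

# The column-major form is the transpose, used with column vectors: M.T @ p
assert np.allclose(M.T @ ijk, world)
```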

spyke7 (Author) commented Jan 27, 2026

Maybe @spyke7 would be interested to work on it after this PR?

Sure!

spyke7 (Author) commented Jan 28, 2026

I am not sure if the threshold param (currently part of the __init__, but not exposed in the export) is needed. Looks like it is used to set the tolerance value of copyFromArray method - maybe it should be renamed accordingly. If it is needed, it should be exposed in a similar way to the params above. Thanks

Yes, threshold is used to set the tolerance.

spyke7 (Author) commented Jan 29, 2026

This exporter could have additional args (similar to DX) that control the export behavior. Something like:

def _export_vdb(
    self,
    filename,
    center: bool = False,
    scale: float = 1.0,
    metadata: dict = None,
    ...
):
    ...

The center param could control whether the grid is centered around origin or left untouched and the scale param to scale the grid (OpenVDB scaling, not GDF grid resampling). scale/preScale and translate/postTranslate in pyopenvdb/openvdb respectively can be used

So, for example, if I use g.export(str(output_file), scale=0.01), the export() function in core.py will reject the scale=0.01 keyword. So I need to add **kwargs to export as well:

def export(self, filename, file_format=None, type=None, typequote='"', **kwargs):

and inside the function change the call to

exporter(filename, type=type, typequote=typequote, **kwargs)

If this can be done, then we can pass scale, tolerance, and center. Should I do it, @orbeckst? The comments say it can process kwargs, but it is not required to do so...

orbeckst (Member) commented Jan 29, 2026

Yes, sort of: you need to add the additional explicit keyword arguments to the top-level export() method and then add the format-specific keywords to the _export_vdb() method; still keep **kwargs, as this will swallow all other keywords that are not relevant for vdb.
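A self-contained sketch of that plumbing (FakeGrid and its attributes are illustrative stand-ins, not the real gridData API):

```python
class FakeGrid:
    def __init__(self):
        self.vdb_args = None  # records what _export_vdb received

    def export(self, filename, file_format=None, type=None, typequote='"',
               center=False, scale=1.0, metadata=None, **kwargs):
        # vdb-relevant keywords are explicit; **kwargs catches the rest
        if filename.endswith(".vdb"):
            self._export_vdb(filename, center=center, scale=scale,
                             metadata=metadata, **kwargs)

    def _export_vdb(self, filename, center=False, scale=1.0,
                    metadata=None, **kwargs):
        # **kwargs silently swallows keywords meant for other formats
        self.vdb_args = dict(center=center, scale=scale, metadata=metadata)

g = FakeGrid()
# typequote is consumed by export(); an unrelated keyword is swallowed
# by **kwargs instead of raising a TypeError:
g.export("out.vdb", scale=0.01, typequote="'", some_dx_only_flag=True)
print(g.vdb_args["scale"])  # 0.01
```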

orbeckst (Member):

See #149 (comment) for a discussion for why we want to have explicit keywords.


Successfully merging this pull request may close these issues: add OpenVDB format